991.
Using Wang–Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined), we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. Free chains of these proteins are known first to undergo a collapse “transition” to a globule state, followed by a second “transition” into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of “transitions”. These transitions depend upon the relative interaction strengths and are largely inaccessible to “standard” Monte Carlo methods.
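A minimal sketch of the Wang–Landau iteration for a generic discrete energy landscape follows; it is not the authors' HP-lattice implementation, and the `energy`, `propose`, and `e_levels` callables/parameters are assumptions standing in for the model-specific pieces (for the HP model, `propose` would apply a pull move or bond-rebridging move).

```python
import math
import random

def wang_landau(energy, propose, e_levels, ln_f_final=1e-8, flatness=0.8,
                sweep=10000):
    """Estimate ln g(E), the log density of states, by Wang-Landau sampling.

    energy(x)  -> energy level of configuration x
    propose(x) -> a trial configuration; propose(None) must return an
                  initial configuration
    e_levels   -> iterable of all reachable energy levels
    """
    ln_g = {e: 0.0 for e in e_levels}   # running estimate of ln g(E)
    hist = {e: 0 for e in e_levels}     # visit histogram for flatness checks
    ln_f = 1.0                          # log modification factor, halved per stage
    x = propose(None)
    e = energy(x)
    while ln_f > ln_f_final:
        for _ in range(sweep):
            y = propose(x)
            e_y = energy(y)
            # accept with prob min(1, g(E)/g(E')): rarely visited levels are favoured
            if random.random() < math.exp(min(0.0, ln_g[e] - ln_g[e_y])):
                x, e = y, e_y
            ln_g[e] += ln_f
            hist[e] += 1
        counts = list(hist.values())
        if min(counts) > flatness * sum(counts) / len(counts):  # "flat" histogram
            hist = dict.fromkeys(hist, 0)
            ln_f /= 2.0                 # refine the estimate and continue
    return ln_g
```

Once ln g(E) is known, canonical averages follow from Z(T) = Σ_E g(E) e^(−E/k_BT), which is how collapse and folding “transitions” can be located, e.g. as peaks in the specific heat.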
992.
Map information for drivers is usually presented in an allocentric-topographic form (as with printed maps) or in an egocentric-schematic form (as with road signs). The advent of new variable message boards on UK motorways raises the possibility of presenting road maps that reflect congestion ahead. Should these maps be allocentric-topographic or egocentric-schematic? This was assessed in an eye-tracking study in which participants viewed maps of a motorway network to identify whether any congestion was relevant to their intended route. The egocentric-schematic maps were responded to most accurately, with shorter fixation durations suggesting easier processing. In particular, the driver's entrance to and intended exit from the map were attended to more in the allocentric maps. Individual differences in mental rotation ability also appear to contribute to poor performance on allocentric maps. The results favour egocentric-schematic maps for roadside congestion information, and also provide theoretical insights into map rotation and individual differences. Statement of Relevance: This study informs designers and policy makers about optimum representations of traffic congestion on roadside variable message signs and, furthermore, demonstrates that individual differences contribute to problems with processing certain sign types. Egocentric-schematic representations of a motorway network produced the best results, as noted in behavioural and eye-movement measures.
993.
The problem tackled in this article consists in associating perceived objects detected at a certain time with known objects detected previously, given uncertain and imprecise information regarding the association of each perceived object with each known object. This problem can occur, for instance, during the association step of an obstacle-tracking process, especially in the context of vehicle driving aids. A contribution to the modeling of this association problem in the belief function framework is introduced. By interpreting belief functions as weighted opinions according to the Transferable Belief Model semantics, pieces of information regarding the association of known objects and perceived objects can be expressed in a common global space of association and combined by the conjunctive rule of combination, and a decision can then be made using the pignistic transformation. The approach is validated on real data.
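The two TBM operations named in the abstract are standard; below is a minimal sketch of the unnormalized conjunctive rule and the pignistic transformation over an illustrative two-object frame. The frame, mass values, and variable names are assumptions, not the paper's data or association space.

```python
from itertools import product

def conjunctive(m1, m2):
    """Unnormalized conjunctive combination of two mass functions.
    Masses are dicts mapping frozenset(hypotheses) -> mass; conflict
    accumulates on the empty set, as the open-world TBM allows."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        out[inter] = out.get(inter, 0.0) + wa * wb
    return out

def pignistic(m):
    """Pignistic transformation BetP: spread each focal mass evenly over
    its elements, renormalizing away the mass on the empty set."""
    conflict = m.get(frozenset(), 0.0)
    bet = {}
    for focal, w in m.items():
        if not focal:
            continue
        share = w / (len(focal) * (1.0 - conflict))
        for h in focal:
            bet[h] = bet.get(h, 0.0) + share
    return bet

# Two sources weighing in on whether a perceived object matches known
# object k1 or k2 (frame {"k1", "k2"}); the values are illustrative only.
m1 = {frozenset({"k1"}): 0.6, frozenset({"k1", "k2"}): 0.4}
m2 = {frozenset({"k2"}): 0.3, frozenset({"k1", "k2"}): 0.7}
fused = conjunctive(m1, m2)
print(pignistic(fused))  # point probabilities for the association decision
```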
994.
We give polynomial-time, deterministic randomness extractors for sources generated in small space, where we model space-s sources on {0,1}^n as sources generated by width-2^s branching programs. Specifically, there is a constant η>0 such that for any ζ>n^(−η), our algorithm extracts m=(δ−ζ)n bits that are exponentially close to uniform (in variation distance) from space-s sources with min-entropy δn, where s=Ω(ζ³n). Previously, nothing was known for δ≤1/2, even for space 0. Our results are obtained by a reduction to the class of total-entropy independent sources. This model generalizes both the well-studied models of independent sources and symbol-fixing sources. These sources consist of a set of r independent smaller sources over {0,1}^ℓ, where the total min-entropy over all the smaller sources is k. We give deterministic extractors for such sources when k is as small as polylog(r), for small enough ℓ.
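The paper's construction is not reproduced here, but the simplest special case it generalizes, an oblivious bit-fixing source, can be illustrated directly: XOR of all bits extracts one perfectly uniform bit whenever at least one bit is free. The sketch below shows only that toy case; the source model and names are assumptions for illustration.

```python
import random
from functools import reduce
from operator import xor

def bit_fixing_source(n, free_positions):
    """An oblivious bit-fixing source: bits outside free_positions are
    fixed (here to 0); bits inside are independent fair coin flips."""
    x = [0] * n
    for i in free_positions:
        x[i] = random.randint(0, 1)
    return x

def xor_extractor(bits):
    """Parity extracts one perfectly uniform bit from any oblivious
    bit-fixing source with at least one free bit."""
    return reduce(xor, bits)

# The output bit is unbiased regardless of which positions are free.
sample = [xor_extractor(bit_fixing_source(16, {3, 7})) for _ in range(10000)]
print(sum(sample) / len(sample))  # ~ 0.5
```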
995.
Remotely sensed vegetation indices are widely used to detect greening and browning trends, in particular the global time series of normalized difference vegetation index (NDVI) data available since 1981. Seasonality and serial autocorrelation in the data have previously been dealt with by integrating the data to annual values; as an alternative to reducing the temporal resolution, we apply harmonic analyses and non-parametric trend tests to the GIMMS NDVI dataset (1981-2006). Using the complete dataset, greening and browning trends were analyzed with a linear model corrected for seasonality by subtracting the seasonal component, and with a seasonal non-parametric model. In a third approach, phenological shift and variation in the length of the growing season were accounted for by analyzing the time series using vegetation development stages rather than calendar days. Results differed substantially between the models, even though the input data were the same. Prominent regional greening trends identified by several other studies were confirmed, but the models were inconsistent in areas with weak trends. The linear model using data corrected for seasonality showed trend slopes similar to those described in previous work using linear models on yearly mean values. The non-parametric models demonstrated the significant influence of variations in phenology; accounting for these variations should yield more robust trend analyses and a better understanding of vegetation trends.
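A minimal sketch of the first two approaches, subtracting a mean seasonal cycle before fitting a linear trend, and a non-parametric Mann–Kendall test on the deseasonalized series, is given below. A monthly cadence and the synthetic series are assumptions for illustration (GIMMS data are actually 15-day composites), and the Mann–Kendall variance omits tie correction.

```python
import numpy as np
from scipy import stats

def deseasonalize(ndvi, period=12):
    """Remove the mean seasonal cycle (here: a monthly climatology)."""
    x = np.asarray(ndvi, dtype=float)
    anomalies = x.copy()
    for phase in range(period):
        anomalies[phase::period] -= x[phase::period].mean()
    return anomalies

def linear_trend(anomalies):
    """OLS slope per time step on the seasonality-corrected series."""
    t = np.arange(len(anomalies))
    slope, _, _, p_value, _ = stats.linregress(t, anomalies)
    return slope, p_value

def mann_kendall(anomalies):
    """Non-parametric Mann-Kendall statistic S with a normal-approximation
    p-value (no tie correction, for brevity)."""
    x = np.asarray(anomalies, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, p

# Synthetic 26-year monthly NDVI with seasonality and a weak greening trend.
rng = np.random.default_rng(0)
t = np.arange(26 * 12)
ndvi = 0.4 + 0.1 * np.sin(2 * np.pi * t / 12) + 5e-4 * t + rng.normal(0, 0.02, t.size)
anom = deseasonalize(ndvi)
print(linear_trend(anom), mann_kendall(anom))
```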
996.
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed-wing aircraft, as the high-resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and the primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches.
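The two agreement definitions quoted above are easy to make concrete. The sketch below computes both; it omits the design-based weighting implied by the stratified three-stage sample (the published estimates account for inclusion probabilities), and the example pixels are invented for illustration.

```python
def overall_accuracy(records):
    """records: list of (map_class, primary_ref, alternate_ref) tuples.
    Returns (strict, lenient) overall accuracy. The published NLCD figures
    additionally weight each pixel by its stratified-sampling inclusion
    probability, which this sketch omits."""
    strict = sum(m == p for m, p, _ in records)
    lenient = sum(m == p or m == a for m, p, a in records)
    n = len(records)
    return strict / n, lenient / n

# Illustrative pixels: (map label, primary reference, alternate reference)
sample = [("forest", "forest", None),
          ("shrub", "herbaceous", "shrub"),   # counts only under lenient agreement
          ("water", "wetland", "wetland")]
print(overall_accuracy(sample))  # (0.333..., 0.666...)
```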
997.
Schema integration aims to create a mediated schema as a unified representation of existing heterogeneous sources sharing a common application domain. These sources have been increasingly written in XML due to its versatility and expressive power. Unfortunately, these sources often use different elements and structures to express the same concepts and relations, thus causing substantial semantic and structural conflicts. Such a challenge impedes the creation of high-quality mediated schemas and has not been adequately addressed by existing integration methods. In this paper, we propose a novel method, named XINTOR, for automating the integration of heterogeneous schemas. Given a set of XML sources and a set of correspondences between the source schemas, our method aims to create a complete and minimal mediated schema: it completely captures all of the concepts and relations in the sources without duplication, provided that the concepts do not overlap. Our contributions are fourfold. First, we resolve structural conflicts inherent in the source schemas. Second, we introduce a new statistics-based measure, called path cohesion, for selecting concepts and relations to be part of the mediated schema. The path cohesion is statistically computed from multiple path quality dimensions such as average path length and path frequency. Third, we resolve semantic conflicts by augmenting the semantics of similar concepts with context-dependent information. Finally, we propose a novel double-layered mediated schema to retain a wider range of concepts and relations than existing mediated schemas, which are at best either complete or minimal, but not both. Experiments on both real and synthetic datasets show that XINTOR outperforms existing methods with respect to (i) mediated-schema quality, measured by precision, recall, F-measure, and schema minimality; and (ii) execution performance, measured by execution time and scale-up performance.
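The abstract names path cohesion as a statistic over path quality dimensions (average path length, path frequency) without giving its formula, so the sketch below is a hypothetical scoring function combining just those two dimensions; the `alpha` weighting, the function name, and the example paths are all inventions for illustration, not XINTOR's actual metric.

```python
from collections import Counter

def path_cohesion(paths, alpha=0.5):
    """Hypothetical path-cohesion score per path signature.

    paths: one tuple of element names per occurrence across the source
    schemas, e.g. ("order", "item", "price"). Shorter paths and more
    frequent paths score higher; alpha trades the two dimensions off.
    Illustration only -- the paper's statistical definition is richer.
    """
    counts = Counter(paths)
    total = sum(counts.values())
    scores = {}
    for path, freq in counts.items():
        brevity = 1.0 / len(path)      # favour short paths
        frequency = freq / total       # favour recurring structures
        scores[path] = alpha * brevity + (1 - alpha) * frequency
    return scores

paths = [("order", "item", "price"), ("order", "item", "price"),
         ("order", "shipping", "address", "zip")]
print(path_cohesion(paths))
```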
998.
We introduce a fully automatic algorithm which optimizes the high-level structure of a given quadrilateral mesh to achieve a coarser quadrangular base complex. Such a topological optimization is highly desirable, since state-of-the-art quadrangulation techniques lead to meshes which have an appropriate singularity distribution and an anisotropic element alignment, but are usually still far from the high-level structure typical of carefully designed meshes created manually by specialists and used, e.g., in animation or simulation. In this paper we show that the quality of the high-level structure is negatively affected by helical configurations within the quadrilateral mesh. Consequently, we present an algorithm which detects helices and is able to remove most of them by applying a novel grid-preserving simplification operator (GP-operator) which is guaranteed to maintain an all-quadrilateral mesh. Additionally, it preserves the given singularity distribution and, in particular, does not introduce new singularities. For each helix we construct a directed graph in which cycles through the start vertex encode operations that remove the corresponding helix. A simple graph search algorithm can therefore be performed iteratively to remove as many helices as possible and thus improve the high-level structure in a greedy fashion. We demonstrate the usefulness of our automatic structure optimization technique on several examples of varying complexity.
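The graph-search step lends itself to a small sketch: a depth-first enumeration of directed cycles through a designated start vertex. The geometric construction of the graph and the GP-operator are the paper's contribution and are not reproduced; the adjacency-dict representation and the toy graph below are assumptions.

```python
def cycles_through(start, adj, limit=10):
    """Enumerate directed cycles that begin and end at `start`; `adj` maps
    vertex -> iterable of successors. In the paper each such cycle encodes a
    sequence of operations removing one helix; here the graph is abstract."""
    cycles = []
    def dfs(v, path):
        if len(path) > limit:          # bound search depth
            return
        for w in adj.get(v, ()):
            if w == start:
                cycles.append(path + [w])
            elif w not in path:        # keep the walk simple (no repeats)
                dfs(w, path + [w])
    dfs(start, [start])
    return cycles

adj = {"a": ["b"], "b": ["c", "a"], "c": ["a"]}
print(cycles_through("a", adj))  # two cycles through "a" (order may vary)
```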
999.
This paper proposes a complete framework to assess the overall performance of classification models from a user perspective in terms of accuracy, comprehensibility, and justifiability. A review of accuracy and comprehensibility measures is provided, and a novel metric is introduced that allows one to measure the justifiability of classification models. Furthermore, a taxonomy of domain constraints is introduced, and an overview of existing approaches to imposing constraints and including domain knowledge in data mining techniques is presented. Finally, the justifiability metric is applied to a credit scoring and a customer churn prediction case.
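The abstract does not give the metric's form, so the sketch below is one plausible, hypothetical instantiation: scoring a fitted model by how many of its coefficients agree in sign with domain knowledge (e.g., in credit scoring, higher income should not increase predicted default). Function name, constraint encoding, and coefficients are all invented for illustration.

```python
import numpy as np

def sign_justifiability(coefs, expected_signs):
    """Hypothetical justifiability score: the fraction of constrained model
    coefficients whose sign agrees with domain knowledge. `expected_signs`
    maps feature name -> +1, -1, or 0 (no constraint); unconstrained
    features are skipped. Illustration only, not the paper's metric."""
    checked, agree = 0, 0
    for name, sign in expected_signs.items():
        if sign == 0:
            continue
        checked += 1
        agree += int(np.sign(coefs[name]) == sign)
    return agree / checked if checked else 1.0

coefs = {"income": -0.8, "debt_ratio": 1.2, "age": -0.1}
# Domain knowledge for default risk: more income -> lower risk, more debt -> higher.
print(sign_justifiability(coefs, {"income": -1, "debt_ratio": +1, "age": 0}))  # 1.0
```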
1000.
The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min c·x : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, if P ≠ NP this ratio cannot be improved to k−1−ε, and under the unique games conjecture this ratio cannot be improved to k−ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max c·x : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a (2k²+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k = 2, and for both problems when every A_ij is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.
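A minimal sketch of the classical threshold-rounding idea behind k-approximations for row-sparse covering programs is shown below for the 0/1 set-cover-like special case; the paper's algorithm is more general (multiplicity bounds d, handled via knapsack-cover inequalities) and is not what this code implements.

```python
import numpy as np
from scipy.optimize import linprog

def cover_round(c, A, b, k):
    """Threshold rounding for a 0/1 covering IP  min c.x : Ax >= b, x in {0,1}^n,
    where every row of A has at most k nonzeroes and A in {0,1}, b = 1
    (set-cover-like), so any row is satisfied by a single chosen variable.
    Solve the LP relaxation and set x_j = 1 whenever x*_j >= 1/k: since each
    row sums at most k LP values to >= 1, some value is >= 1/k, so the rounded
    x is feasible and costs at most k times the LP optimum. This is a simpler
    special case than the paper's knapsack-cover-based algorithm."""
    n = len(c)
    res = linprog(c, A_ub=-np.asarray(A, float), b_ub=-np.asarray(b, float),
                  bounds=[(0, 1)] * n)
    x = (res.x >= 1.0 / k).astype(int)
    return x, res.fun  # integral cover and the LP lower bound

# Vertex cover on a triangle (k = 2 nonzeroes per row): LP optimum 1.5,
# rounding selects all three vertices, cost 3 <= 2 * 1.5.
A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
x, lp = cover_round([1, 1, 1], A, [1, 1, 1], k=2)
print(x, lp)
```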